We propose a new neural network design paradigm, the Reversible Column Network (RevCol). The main body of RevCol is composed of multiple copies of a subnetwork, called columns, between which multi-level reversible connections are employed. This architectural scheme gives RevCol behavior very different from that of conventional networks: during forward propagation, features in RevCol are learned to be gradually disentangled as they pass through each column, while their total information is maintained rather than compressed or discarded as in other networks. Our experiments suggest that CNN-style RevCol models can achieve very competitive performance on multiple computer vision tasks such as image classification, object detection, and semantic segmentation, especially with large parameter budgets and large datasets. For example, after ImageNet-22K pre-training, RevCol-XL obtains 88.2% ImageNet-1K accuracy. Given more pre-training data, our largest model, RevCol-H, reaches 90.0% on ImageNet-1K, 63.8% APbox on the COCO detection minival set, and 61.0% mIoU on ADE20K segmentation. To our knowledge, these are the best COCO detection and ADE20K segmentation results among pure (static) CNN models. Moreover, as a general macro-architecture scheme, RevCol can also be introduced into transformers or other neural networks, where it is demonstrated to improve performance in both computer vision and NLP tasks. We release code and models at https://github.com/megvii-research/RevCol
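The core mechanism can be illustrated with a toy multi-level reversible connection. The sketch below is a minimal illustration, not the paper's exact formulation: the transform `F`, the scaling factor `gamma`, and the toy dimensions are all assumptions. Because the combination is invertible, the previous column's activation can be recovered exactly from the current column's output, which is what allows total information to be maintained rather than discarded.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))      # weights of the toy transform F (illustrative)
gamma = 0.5                          # reversible scaling factor (illustrative)

def F(x_low):
    """Toy non-linear transform of the lower-level feature."""
    return np.tanh(x_low @ W)

def forward(x_prev_same, x_low):
    """Current column's feature from the previous column's same-level
    feature and the current column's lower-level feature."""
    return gamma * x_prev_same + F(x_low)

def inverse(y, x_low):
    """Recover the previous column's same-level feature from the output,
    so it need not be stored during backpropagation."""
    return (y - F(x_low)) / gamma

x_prev = rng.standard_normal((4, 8))  # previous column, same level
x_low = rng.standard_normal((4, 8))   # current column, lower level
y = forward(x_prev, x_low)
x_rec = inverse(y, x_low)
print(np.allclose(x_rec, x_prev))     # exact reconstruction
```

The same round-trip property is what reversible networks in general exploit to trade recomputation for activation memory.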
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
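The most commonly reported strategy for oversized samples, patch-based training, amounts to tiling a large image into fixed-size crops. The sketch below illustrates the idea; the patch size and stride are arbitrary illustrative values, not ones prescribed by the survey.

```python
import numpy as np

def extract_patches(image, patch=64, stride=64):
    """Tile a 2D image into (patch x patch) crops taken every `stride`
    pixels; with stride == patch the tiles are non-overlapping."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches)

big_image = np.zeros((256, 256))      # stand-in for an image too large to batch
tiles = extract_patches(big_image)
print(tiles.shape)                    # (16, 64, 64)
```

Each tile can then be fed to the network as an independent training sample, keeping memory use bounded regardless of the source image size.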
Federated learning (FL) is a promising approach to enabling the future Internet of Vehicles, consisting of intelligent connected vehicles (ICVs) with powerful sensing, computing, and communication capabilities. We consider a base station (BS) coordinating nearby ICVs to train a neural network in a collaborative yet distributed manner, in order to limit data traffic and privacy leakage. However, due to the mobility of vehicles, the connections between the BS and the ICVs are short-lived, which affects the resource utilization of the ICVs and, thus, the convergence speed of the training process. In this paper, we propose an accelerated FL-ICV framework that optimizes the duration of each training round and the number of local iterations for better convergence of FL. We propose a mobility-aware optimization algorithm called MOB-FL, which aims at maximizing the resource utilization of ICVs under short-lived wireless connections so as to increase the convergence speed. Simulation results on a beam selection task and a trajectory prediction task verify the effectiveness of the proposed solution.
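A toy version of one such BS-coordinated training round, with a fixed number of local iterations per vehicle, might look like the following. The client count, the local iteration count `tau`, the learning rate, and the quadratic per-vehicle loss are all illustrative assumptions, not the MOB-FL algorithm itself, which additionally tunes the round duration and `tau` under mobility constraints.

```python
import numpy as np

rng = np.random.default_rng(1)
global_w = np.zeros(3)                                 # model held at the BS
clients = [rng.standard_normal(3) for _ in range(5)]   # per-vehicle optima (toy data)

def local_update(w, target, tau=4, lr=0.1):
    """tau local gradient steps on a toy quadratic loss ||w - target||^2 / 2,
    standing in for each vehicle's local training between BS syncs."""
    for _ in range(tau):
        w = w - lr * (w - target)
    return w

# One federated round: each connected ICV trains locally, the BS averages.
updates = [local_update(global_w.copy(), t) for t in clients]
global_w = np.mean(updates, axis=0)
print(global_w.shape)                                  # (3,)
```

In the mobility-aware setting, the number of vehicles still connected when the round closes (and hence the list being averaged) depends on the chosen round duration, which is exactly the trade-off the framework optimizes.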
The graph structure of road networks is critical to downstream tasks of autonomous driving systems, such as global planning, motion prediction, and control. In the past, road network graphs were usually annotated manually by human experts, which is time-consuming and labor-intensive. To obtain road network graphs with better effectiveness and efficiency, automatic road network graph detection methods are needed. Previous works either post-process semantic segmentation maps or propose graph-based algorithms to directly predict the road network graph. However, they suffer from hard-coded heuristic processing algorithms and inferior final performance. To enhance the previous state-of-the-art (SOTA) method RNGDet, we add an instance segmentation head to better supervise the model training and enable the model to leverage multi-scale features of the backbone network. Since the newly proposed method improves upon RNGDet, it is named RNGDet++. All methods are evaluated on a large publicly available dataset. RNGDet++ outperforms baseline models on almost all metric scores. It improves the topology correctness metric APLS (average path length similarity) by 3\%. A demo video and supplementary materials are available at our project page \url{https://tonyxuqaq.github.io/projects/rngdetplusplus/}.
With the rapid development of autonomous vehicles, there is a surging demand for high-definition maps (HD maps), which provide reliable and robust static environment information for autonomous driving scenarios. As one of the main high-level elements in HD maps, road lane centerlines are critical to downstream tasks such as prediction and planning. Manually annotating lane centerline HD maps with human annotators is labor-intensive, expensive, and inefficient, severely restricting the wide application and fast deployment of autonomous driving systems. Previous work seldom explores the centerline HD map mapping problem, due to the complicated topology and severe overlapping of road centerlines. In this paper, we propose a novel method named CenterLineDet to automatically create lane centerline HD maps. CenterLineDet is trained by imitation learning and can iteratively and efficiently detect the graph of lane centerlines using vehicle-mounted sensors. Thanks to the application of a DETR-like transformer network, CenterLineDet can handle complicated graph topologies, such as lane intersections. The proposed method is evaluated on the large publicly available dataset NuScenes, and the superiority of CenterLineDet is well demonstrated through comparison results. This paper is accompanied by a demo video and a supplementary document, available at \url{https://tonyxuqaq.github.io/projects/centerlinedet/}.
Spiking neural networks (SNNs) have attracted extensive attention in brain-inspired artificial intelligence and computational neuroscience. They can be used to simulate biological information processing in the brain at multiple scales. More importantly, SNNs serve as an appropriate level of abstraction for bringing inspiration from the brain and cognition into artificial intelligence. In this paper, we present the Brain-inspired Cognitive Intelligence Engine (BrainCog) for creating brain-inspired AI and brain simulation models. BrainCog incorporates different types of spiking neuron models, learning rules, brain areas, etc., as essential modules provided by the platform. Based on these easy-to-use modules, BrainCog supports various brain-inspired cognitive functions, including perception and learning, decision making, knowledge representation and reasoning, motor control, and social cognition. These brain-inspired AI models have been effectively validated on various supervised, unsupervised, and reinforcement learning tasks, and they can be used to enable AI models with multiple brain-inspired cognitive functions. For brain simulation, BrainCog realizes functional simulation of decision making and working memory, structural simulation of neural circuits, and whole-brain structure simulation of the mouse brain, macaque brain, and human brain. An AI engine named BORN is developed based on BrainCog, demonstrating how the components of BrainCog can be integrated and used to build AI models and applications. To support the scientific quest of decoding the nature of biological intelligence and creating AI, BrainCog aims to provide essential and easy-to-use building blocks, as well as infrastructural support, for developing brain-inspired spiking neural network based AI and for simulating the cognitive brain at multiple scales. An online repository of BrainCog can be found at https://github.com/braincog-x.
Exploiting pseudo labels (e.g., categories and bounding boxes) of unannotated objects produced by a teacher detector has underpinned much of the recent progress in semi-supervised object detection (SSOD). However, due to the limited generalization capacity of the teacher detector caused by the scarce annotations, the produced pseudo labels often deviate from the ground truth, especially those with relatively low classification confidence, thus limiting the generalization performance of SSOD. To mitigate this problem, we propose a dual pseudo-label polishing framework for SSOD. Instead of directly exploiting the pseudo labels produced by the teacher detector, we make the first attempt to reduce their deviation from the ground truth through dual polishing learning, in which two differently structured polishing networks are elaborately developed and trained on the given annotated objects for the categories and bounding boxes, respectively. By doing this, both polishing networks can infer more accurate pseudo labels for the unannotated objects by fully exploiting their context knowledge based on the initially produced pseudo labels, thereby improving the generalization performance of SSOD. Moreover, this scheme can be seamlessly plugged into existing SSOD frameworks for end-to-end learning. In addition, we propose to use the polished pseudo categories and bounding boxes of unannotated objects separately for category classification and bounding box regression in SSOD, which enables more unannotated objects to be introduced during model training and thus further improves the performance. Experiments on the PASCAL VOC and MS COCO benchmarks demonstrate the superiority of the proposed method over existing state-of-the-art baselines.
Vehicle-to-Everything (V2X) networks have enabled collaborative perception in autonomous driving, a promising solution to the fundamental defects of stand-alone intelligence, including blind zones and long-range perception. However, the lack of datasets has severely blocked the development of collaborative perception algorithms. In this work, we release DOLPHINS: Dataset for cOllaborative Perception enabling Harmonious and INterconnected Self-driving, a new simulated large-scale, various-scenario, multi-view, multi-modality autonomous driving dataset that provides a ground-breaking benchmark platform for interconnected autonomous driving. DOLPHINS outperforms current datasets in six dimensions: temporally-aligned images and point clouds from both vehicles and roadside units (RSUs), enabling both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) based collaborative perception; 6 typical scenarios with dynamic weather conditions, making it the most varied interconnected autonomous driving dataset; meticulously selected viewpoints, providing full coverage of the key areas and every object; 42,376 frames and 292,549 objects, together with the corresponding 3D annotations, geo-positions, and calibrations, constituting the largest collaborative perception dataset; full-HD images and 64-line LiDARs, yielding high-resolution data with sufficient detail; and well-organized APIs and open-source code, ensuring the extensibility of DOLPHINS. We also construct benchmarks for 2D detection, 3D detection, and multi-view collaborative perception tasks on DOLPHINS. Experimental results show that a raw fusion scheme through V2X communication can help improve precision and reduce the need for expensive LiDAR equipment when RSUs are present, which may accelerate the popularization of interconnected self-driving vehicles. DOLPHINS is now available at https://dolphins-dataset.net/.
When designing diagnostic models for clinical applications, it is crucial to guarantee the robustness of the models with respect to a wide range of image corruptions. Herein, an easy-to-use benchmark is established to evaluate how deep neural networks perform on corrupted pathology images. Specifically, corrupted images are generated by injecting nine types of common corruption into validation images. In addition, two classification metrics and one ranking metric are designed to evaluate the prediction and confidence performance under corruption. Evaluated on the two resulting benchmark datasets, we find that (1) a variety of deep neural network models suffer a significant accuracy decrease (double the error on clean images) and unreliable confidence estimation on corrupted images; and (2) the correlation between validation and test errors is low, while replacing the validation set with our benchmark can increase the correlation. Our code is available at https://github.com/superjamessyx/robustness_benchmark.
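The corruption-injection idea can be sketched as below, using Gaussian noise as one stand-in corruption applied to a clean validation image; the severity-to-sigma mapping is an assumption for illustration, not the benchmark's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(image, severity=1):
    """Inject zero-mean Gaussian noise at a given severity level into an
    image with values in [0, 1], then clip back into the valid range."""
    sigma = 0.04 * severity          # assumed severity-to-sigma mapping
    noisy = image + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = rng.random((32, 32, 3))      # stand-in for a clean validation image
corrupted = gaussian_noise(clean, severity=3)
print(corrupted.shape)               # (32, 32, 3), values still in [0, 1]
```

Evaluating the same model on `clean` and on `corrupted` inputs across corruption types and severities yields the accuracy-drop and confidence statistics the benchmark reports.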
Adapting to a continuously evolving environment is a safety-critical challenge inevitably faced by all autonomous driving systems. Existing image and video driving datasets, however, fail to capture the mutable nature of the real world. In this paper, we introduce SHIFT, the largest multi-task synthetic dataset for autonomous driving. It presents discrete and continuous shifts in cloudiness, rain intensity, time of day, and vehicle and pedestrian density. Featuring a comprehensive sensor suite and annotations for several mainstream perception tasks, SHIFT allows investigating the degradation of perception system performance at increasing levels of domain shift, fostering the development of continuous adaptation strategies to mitigate this problem and assessing model robustness and generality. Our dataset and benchmark toolkit are publicly available at www.vis.xyz/shift.